292 research outputs found

    On Perceptual Distortion Measures and Parametric Modeling

    An Exact Subspace Method for Fundamental Frequency Estimation

    Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    Multi-Channel Maximum Likelihood Pitch Estimation

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator based on a parametric model in which the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This means that the model allows for different conditions in the various channels, such as different signal-to-noise ratios, microphone characteristics, and reverberation. Moreover, the method does not assume a particular array structure but relies on a more general model and is hence suited for a large class of problems. Simulations with real signals show that the method outperforms a state-of-the-art multi-channel method in terms of gross error rate. Index Terms: pitch estimation, microphone arrays, multi-channel audio
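
    A minimal sketch of the idea, assuming a harmonic signal model, white Gaussian noise with an unknown per-channel variance, and a simple grid search (the names multichannel_ml_pitch, harmonic_basis, and num_harmonics are illustrative, not the paper's implementation): each candidate fundamental frequency is scored by summing, over the channels, the concentrated log-likelihood term N_k * log(residual power), so every channel keeps its own amplitudes, phases, and noise level while sharing the pitch.

```python
import numpy as np

def harmonic_basis(omega0, num_harmonics, n_samples):
    """Real harmonic basis: cosine and sine columns for each harmonic of omega0."""
    n = np.arange(n_samples)[:, None]
    l = np.arange(1, num_harmonics + 1)[None, :]
    return np.hstack([np.cos(omega0 * n * l), np.sin(omega0 * n * l)])

def multichannel_ml_pitch(channels, num_harmonics, omega_grid):
    """
    Grid search over candidate fundamental frequencies (radians/sample).
    All channels share omega0, but each keeps its own amplitudes, phases and
    unknown white-noise variance, so the concentrated negative log-likelihood
    is the sum over channels of N_k * log(residual power).
    """
    costs = []
    for omega0 in omega_grid:
        cost = 0.0
        for x in channels:
            Z = harmonic_basis(omega0, num_harmonics, len(x))
            amp, *_ = np.linalg.lstsq(Z, x, rcond=None)  # per-channel LS amplitudes/phases
            resid = x - Z @ amp
            cost += len(x) * np.log(np.mean(resid ** 2))
        costs.append(cost)
    return omega_grid[int(np.argmin(costs))]

# Toy usage: two channels with the same pitch but different gains and SNRs.
rng = np.random.default_rng(0)
n = np.arange(400)
omega_true = 2 * np.pi * 0.031
clean = np.sin(omega_true * n) + 0.5 * np.sin(2 * omega_true * n + 0.3)
channels = [clean + 0.1 * rng.normal(size=n.size),
            0.3 * clean + 0.3 * rng.normal(size=n.size)]
grid = 2 * np.pi * np.linspace(0.01, 0.1, 2000)
print(multichannel_ml_pitch(channels, num_harmonics=2, omega_grid=grid))  # close to omega_true
```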

    Model-based Analysis and Processing of Speech and Audio Signals

    On The Estimation of Low Fundamental Frequencies

    Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework

    As the modern world becomes increasingly digitized and interconnected, distributed signal processing has proven effective in processing the large volumes of data it produces. However, a major challenge limiting the broad use of distributed signal processing techniques is the issue of privacy in handling sensitive data. To address this privacy issue, we propose a novel yet general subspace perturbation method for privacy-preserving distributed optimization, which allows each node to obtain the desired solution while protecting its private data. In particular, we show that the dual variables introduced in each distributed optimizer will not converge in a certain subspace determined by the graph topology. Additionally, the optimization variable is guaranteed to converge to the desired solution, because it is orthogonal to this non-convergent subspace. We therefore propose to insert noise in the non-convergent subspace through the dual variable such that the private data are protected, while the accuracy of the desired solution is completely unaffected. Moreover, the proposed method is shown to be secure under two widely used adversary models: the passive and the eavesdropping adversary. Furthermore, we consider several distributed optimizers, such as ADMM and PDMM, to demonstrate the general applicability of the proposed method. Finally, we test the performance through a set of applications. Numerical tests indicate that the proposed method is superior to existing methods in terms of estimation accuracy, privacy level, communication cost, and convergence rate.
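
    A toy sketch of the subspace perturbation idea, here for distributed averaging with synchronous averaged PDMM on a small ring graph (the variable names and the penalty value c = 0.4 are illustrative assumptions, not the authors' implementation): the dual variables are initialised with large random values instead of zeros, so the quantities exchanged between nodes are masked, yet the primal variables still reach the exact average because the part of that noise that is never forgotten does not enter the primal update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small ring graph; cycles give the dual variables a nontrivial subspace
# in which the injected noise is never forgotten.
n = 5
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def A(i, j):
    """Constraint sign so that every edge enforces x_i - x_j = 0."""
    return 1.0 if i < j else -1.0

s = rng.normal(size=n)  # private data held by the nodes
c = 0.4                 # PDMM penalty parameter (illustrative value)

# Subspace perturbation: start the dual variables z_{i|j} from large random
# values instead of zeros, so the exchanged quantities reveal little about s.
z = {(i, j): 1e3 * rng.normal() for i in range(n) for j in neighbors[i]}

x = np.zeros(n)
for _ in range(5000):
    # Primal update: x_i = (s_i - sum_j A_ij z_{i|j}) / (1 + c * deg_i).
    for i in range(n):
        x[i] = (s[i] - sum(A(i, j) * z[(i, j)] for j in neighbors[i])) \
               / (1.0 + c * len(neighbors[i]))
    # Averaged (theta = 1/2) dual update: node i sends z_{i|j} + 2c A_ij x_i to j.
    z = {(j, i): 0.5 * z[(j, i)] + 0.5 * (z[(i, j)] + 2.0 * c * A(i, j) * x[i])
         for i in range(n) for j in neighbors[i]}

print("max |x_i - mean(s)|:", np.max(np.abs(x - s.mean())))          # should be tiny
print("norm of dual variables:", np.linalg.norm(list(z.values())))   # typically stays at the scale of the injected noise
```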

    A New Metric for VQ-based Speech Enhancement and Separation

    A Privacy-Preserving Asynchronous Averaging Algorithm based on Shamir’s Secret Sharing
